Superthoughts Lite V2 MOE Llama3.2 GGUF
Superthoughts Lite v2 is a lightweight Mixture of Experts (MoE) model based on the Llama-3.2 architecture, focused on reasoning tasks to deliver higher accuracy and performance.
This large language model supports multiple languages.
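Since the weights are distributed in GGUF format, they can be run locally with a GGUF-compatible runtime such as `llama-cpp-python`. The sketch below is an assumption, not an official usage snippet from this card: the GGUF filename is a placeholder, and the prompt helper follows the standard Llama 3 chat template that Llama-3.2-based models use.

```python
# Minimal sketch of running a GGUF build with llama-cpp-python.
# Assumptions: llama-cpp-python is installed, and the filename below
# is a placeholder for the actual quantized GGUF file from this repo.

def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 chat template."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    prompt = format_llama3_prompt(
        "You are a helpful reasoning assistant.",
        "What is 17 * 24?",
    )
    try:
        from llama_cpp import Llama  # pip install llama-cpp-python
        llm = Llama(
            model_path="superthoughts-lite-v2.Q4_K_M.gguf",  # placeholder filename
            n_ctx=4096,
        )
        out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
        print(out["choices"][0]["text"])
    except (ImportError, ValueError, FileNotFoundError):
        # Runtime or model file not available; the prompt still shows
        # the expected chat format for Llama-3.2-based models.
        print(prompt)
```

The try/except keeps the snippet self-contained: without the runtime or the weights it still demonstrates the chat-template formatting the model expects.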